
    Fast Primal-Dual Gradient Method for Strongly Convex Minimization Problems with Linear Constraints

    In this paper we consider a class of optimization problems with a strongly convex objective function and a feasible set given by the intersection of a simple convex set with a set defined by a number of linear equality and inequality constraints. Many optimization problems in applications can be stated in this form, examples being entropy-linear programming, ridge regression, the elastic net, regularized optimal transport, etc. We extend the Fast Gradient Method applied to the dual problem so that it becomes primal-dual, allowing one not only to solve the dual problem but also to construct a nearly optimal and nearly feasible solution of the primal problem. We also prove a theorem on the convergence rate of the proposed algorithm in terms of the objective function and the infeasibility of the linear constraints. Comment: Submitted for DOOR 201
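    To make the setting concrete, below is a minimal sketch (not the paper's exact algorithm) of a fast gradient method run on the dual of a simple strongly convex problem with linear equality constraints, min_x 0.5*||x - c||^2 s.t. Ax = b, with the primal point recovered as a weighted average of the inner minimizers. The quadratic objective, the step sizes, and the averaging weights are illustrative assumptions.

```python
import numpy as np

def primal_dual_fgm(A, b, c, iters=500):
    """Nesterov's fast gradient method on the dual of
        min_x 0.5*||x - c||^2  s.t.  A x = b,
    recovering a primal point as a weighted average of the inner
    minimizers x(lambda) = c - A^T lambda (a sketch, not the paper's exact scheme)."""
    m, n = A.shape
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the dual gradient (strong convexity parameter mu = 1)
    lam = np.zeros(m)                    # current dual iterate
    zeta = np.zeros(m)                   # extrapolated (momentum) point
    x_avg = np.zeros(n)                  # running weighted average of primal points
    weight_sum = 0.0
    for k in range(iters):
        x_k = c - A.T @ zeta             # inner argmin of the Lagrangian at zeta (closed form for this objective)
        grad = A @ x_k - b               # dual gradient = constraint residual
        lam_new = zeta + grad / L        # gradient ascent step on the concave dual
        w = k + 1.0                      # simple increasing weights for primal averaging
        x_avg = (weight_sum * x_avg + w * x_k) / (weight_sum + w)
        weight_sum += w
        zeta = lam_new + (k / (k + 3.0)) * (lam_new - lam)   # Nesterov extrapolation
        lam = lam_new
    return x_avg, lam

# toy usage
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 10))
b = rng.standard_normal(3)
c = rng.standard_normal(10)
x, lam = primal_dual_fgm(A, b, c)
print("constraint residual ||Ax - b|| =", np.linalg.norm(A @ x - b))
```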

    Back-hopping in Spin-Transfer-Torque switching of perpendicularly magnetized tunnel junctions

    We analyse the phenomenon of back-hopping in spin-torque-induced switching of the magnetization in perpendicularly magnetized tunnel junctions. The analysis is based on single-shot time-resolved conductance measurements of the pulse-induced back-hopping. Studying several material variants reveals that back-hopping is a feature of the nominally fixed system of the tunnel junction. Back-hopping is found to proceed by two sequential switching events that lead to a final state P' whose conductance is close to, but distinct from, that of the conventional parallel state. The P' state does not exist at remanence and generally relaxes to the conventional antiparallel state when the current is removed. The P' state involves switching of only the spin-polarizing part of the fixed layers. An analysis of the literature indicates that back-hopping occurs only when the spin-polarizing layer is too weakly coupled to the rest of the fixed system, which justifies a posteriori the back-hopping mitigation strategies that were implemented empirically in spin-transfer-torque magnetic random access memories. Comment: submitted to Phys. Rev.

    Gradient methods for problems with inexact model of the objective

    We consider optimization methods for convex minimization problems under inexact information on the objective function. We introduce an inexact model of the objective, which includes as particular cases the inexact oracle [19] and the relative smoothness condition [43]. We analyze a gradient method that uses this inexact model and obtain convergence rates for convex and strongly convex problems. To show potential applications of our general framework we consider three particular problems. The first is clustering by the electoral model introduced in [49]. The second is approximating the optimal transport distance, for which we propose a Proximal Sinkhorn algorithm. The third is approximating the optimal transport barycenter, for which we propose a Proximal Iterative Bregman Projections algorithm. We also illustrate the practical performance of our algorithms by numerical experiments.
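    As a concrete illustration of the optimal-transport building block mentioned above, the following sketch shows classical Sinkhorn iterations for entropically regularized optimal transport. The paper's Proximal Sinkhorn method wraps such iterations in an outer proximal step, which is not reproduced here; the regularization parameter and the toy data are illustrative assumptions.

```python
import numpy as np

def sinkhorn(C, p, q, eps=0.05, iters=1000):
    """Classical Sinkhorn iterations for entropically regularized optimal transport,
        min_{X1 = p, X^T 1 = q}  <C, X> - eps * H(X),
    shown here only as the inner building block; a proximal outer loop around such
    iterations (as in Proximal Sinkhorn) is not reproduced."""
    K = np.exp(-C / eps)          # Gibbs kernel
    u = np.ones_like(p)
    v = np.ones_like(q)
    for _ in range(iters):
        u = p / (K @ v)           # scale rows to match the marginal p
        v = q / (K.T @ u)         # scale columns to match the marginal q
    return u[:, None] * K * v[None, :]   # approximate transport plan

# toy usage: transport between two small histograms on a line
n = 4
C = np.abs(np.subtract.outer(np.arange(n), np.arange(n))).astype(float)  # |i - j| cost
p = np.full(n, 1.0 / n)
q = np.array([0.1, 0.2, 0.3, 0.4])
X = sinkhorn(C, p, q)
print("row marginals:", X.sum(axis=1))   # should be close to p
```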

    Lifespan extension and the doctrine of double effect

    Recent developments in biogerontology—the study of the biology of ageing—suggest that it may eventually be possible to intervene in the human ageing process. This, in turn, offers the prospect of significantly postponing the onset of age-related diseases. The biogerontological project, however, has met with strong resistance, especially by deontologists. They consider the act of intervening in the ageing process impermissible on the grounds that it would (most probably) bring about an extended maximum lifespan—a state of affairs that they deem intrinsically bad. In a bid to convince their deontological opponents of the permissibility of this act, proponents of biogerontology invoke an argument which is grounded in the doctrine of double effect. Surprisingly, their argument, which we refer to as the ‘double effect argument’, has gone unnoticed. This article exposes and critically evaluates this ‘double effect argument’. To this end, we first review a series of excerpts from the ethical debate on biogerontology in order to substantiate the presence of double effect reasoning. Next, we attempt to determine the role that the ‘double effect argument’ is meant to fulfil within this debate. Finally, we assess whether the act of intervening in ageing actually can be justified using double effect reasoning.

    Advances in low-memory subgradient optimization

    One of the main goals in the development of non-smooth optimization is to cope with high-dimensional problems by decomposition, duality or Lagrangian relaxation, which greatly reduces the number of variables at the cost of worsening the differentiability of the objective or constraints. The small or medium dimensionality of the resulting non-smooth problems allows the use of bundle-type algorithms to achieve higher rates of convergence and obtain higher accuracy, which of course comes at the cost of additional memory requirements, typically of the order of n^2, where n is the number of variables of the non-smooth problem. However, with the rapid development of more and more sophisticated models in industry, economics, finance and so on, such memory requirements are becoming too hard to satisfy. This raised interest in subgradient-based low-memory algorithms, and later developments in this area significantly improved over the early variants while still preserving O(n) memory requirements. To review these developments, this chapter is devoted to black-box subgradient algorithms with minimal requirements for the storage of the auxiliary results needed to execute them. To provide a historical perspective, the survey starts with the original result of N.Z. Shor, which opened this field with an application to the classical transportation problem. The theoretical complexity bounds for smooth and non-smooth convex and quasi-convex optimization problems are then briefly exposed to introduce the relevant fundamentals of non-smooth optimization. Special attention in this section is given to the adaptive step-size policy, which aims to attain the lowest complexity bounds. Unfortunately, the non-differentiability of the objective function in convex optimization essentially worsens the theoretical lower bounds on the rate of convergence in subgradient optimization compared to the smooth case, but there are modern techniques that allow non-smooth convex optimization problems to be solved faster than the lower complexity bounds dictate. In this work particular attention is given to the Nesterov smoothing technique, the Nesterov universal approach, and the Legendre (saddle-point) representation approach. The new results on Universal Mirror Prox algorithms represent the original parts of the survey. To demonstrate the application of non-smooth convex optimization algorithms to the solution of huge-scale extremal problems, we consider convex optimization problems with non-smooth functional constraints and propose two adaptive Mirror Descent methods. The first method is of the primal-dual variety and is proved to be optimal in terms of lower oracle bounds for the class of Lipschitz-continuous convex objectives and constraints. The advantages of applying this method to the sparse Truss Topology Design problem are discussed in some detail. The second method can be applied to the solution of convex and quasi-convex optimization problems and is optimal in the sense of complexity bounds. The concluding part of the survey contains the important references that characterize recent developments in non-smooth convex optimization.
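    As a rough illustration of the kind of low-memory scheme discussed above, here is a minimal sketch of a subgradient method with a functional constraint, written with the Euclidean prox so the mirror step reduces to a plain subgradient step. The step-size rule eps/||h||^2, the fixed iteration budget, and the toy problem are simplified assumptions; the survey's adaptive Mirror Descent methods differ in their exact rules and stopping criteria.

```python
import numpy as np

def mirror_descent_constrained(grad_f, g, grad_g, x0, eps, max_iter=10000):
    """Euclidean-prox sketch of an adaptive Mirror Descent step rule for
        min f(x)  s.t.  g(x) <= 0,
    keeping only O(n) memory (the current point and a running average)."""
    x = x0.astype(float).copy()
    x_sum = np.zeros_like(x)
    n_prod = 0
    for _ in range(max_iter):
        if g(x) <= eps:                       # "productive" step: work on the objective
            h = grad_f(x)
            x_sum += x
            n_prod += 1
        else:                                 # "non-productive" step: reduce constraint violation
            h = grad_g(x)
        x = x - (eps / (np.dot(h, h) + 1e-16)) * h   # Euclidean mirror (subgradient) step
    return x_sum / max(n_prod, 1)             # average over productive steps

# toy usage: minimize ||x||_1 subject to a^T x >= 1
a = np.array([1.0, 2.0])
grad_f = lambda x: np.sign(x)                 # a subgradient of the l1 norm
g = lambda x: 1.0 - a @ x                     # constraint g(x) <= 0  <=>  a^T x >= 1
grad_g = lambda x: -a
x = mirror_descent_constrained(grad_f, g, grad_g, x0=np.zeros(2), eps=1e-2)
print("x =", x, " g(x) =", g(x))
```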